
    Physics-Based Modeling, Analysis and Animation

    The idea of using physics-based models has received considerable interest in computer graphics and computer vision research over the last ten years. The interest arises from the fact that simple geometric primitives cannot accurately represent natural objects. In computer graphics, physics-based models are used to generate and visualize constrained shapes, motions of rigid and nonrigid objects, and object interactions with the environment for the purposes of animation. In computer vision, such models are applied to complex 3-D shape representation, shape reconstruction, and motion estimation. In this paper we review two models that have been used in computer graphics and two models that apply to both areas. In the area of computer graphics, Miller [48] uses a mass-spring model to animate three forms of locomotion of snakes and worms. To overcome the problem of the multitude of degrees of freedom associated with mass-spring lattices, Witkin and Welch [87] present a geometric method for modeling global deformations. To the same end, Pentland and Horowitz [54] decompose object motion into rigid and nonrigid deformation modes. To overcome problems of these last two approaches, Metaxas and Terzopoulos [45] combine local deformations with global ones. Modeling based on physical principles is a potent technique for computer graphics and computer vision, and a rich and fruitful area for research in terms of both theory and applications. It is important, though, to develop concepts, methodologies, and techniques that are widely applicable across many types of applications.
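
    The mass-spring approach cited above can be illustrated with a minimal numerical sketch. The chain topology, parameter values, and explicit-Euler integration below are illustrative assumptions for a generic mass-spring lattice, not Miller's [48] actual locomotion model.

```python
# Minimal mass-spring chain integrator (illustrative sketch only).
# Neighboring point masses are connected by springs with stiffness k,
# rest length r, and viscous damping c; gravity g acts on every node.
import numpy as np

def step(pos, vel, m=1.0, k=50.0, r=1.0, c=0.5, g=(0.0, -9.8), dt=1e-3):
    """Advance a 2-D chain of point masses by one explicit-Euler step."""
    forces = np.tile(np.asarray(g) * m, (len(pos), 1))   # gravity on every node
    for i in range(len(pos) - 1):                        # spring between node i and i+1
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - r) * d / length                # Hooke's law along the spring axis
        forces[i] += f
        forces[i + 1] -= f
    forces -= c * vel                                    # simple velocity damping
    vel = vel + dt * forces / m
    pos = pos + dt * vel
    return pos, vel

# Usage: a 5-node chain initially at rest along the x-axis.
pos = np.array([[float(i), 0.0] for i in range(5)])
vel = np.zeros_like(pos)
for _ in range(1000):
    pos, vel = step(pos, vel)
```

    Each node of such a lattice contributes its own degrees of freedom, which is exactly the scaling problem that motivates the global-deformation and modal approaches of [87] and [54].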

    End-to-end 3D face reconstruction with deep neural networks

    Monocular 3D facial shape reconstruction from a single 2D facial image has been an active research area due to its wide applications. Inspired by the success of deep neural networks (DNN), we propose a DNN-based approach for End-to-End 3D FAce Reconstruction (UH-E2FAR) from a single 2D image. Unlike recent works that reconstruct and refine the 3D face iteratively using both an RGB image and an initial 3D facial shape rendering, our DNN model is end-to-end, so the complicated 3D rendering process can be avoided. Moreover, we integrate two components into the DNN architecture, namely a multi-task loss function and a fusion convolutional neural network (CNN), to improve facial expression reconstruction. With the multi-task loss function, 3D face reconstruction is divided into neutral 3D facial shape reconstruction and expressive 3D facial shape reconstruction. The neutral 3D facial shape is class-specific, so higher-layer features are useful for it; in comparison, the expressive 3D facial shape favors lower- or intermediate-layer features. With the fusion-CNN, features from different intermediate layers are fused and transformed for predicting the 3D expressive facial shape. Through extensive experiments, we demonstrate the superiority of our end-to-end framework in improving the accuracy of 3D face reconstruction. Comment: Accepted to CVPR 2017
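
    The two-branch idea the abstract describes (higher-layer features predicting the neutral shape, fused intermediate-layer features predicting the expression, trained with a combined loss) can be sketched as below. The backbone, layer sizes, parameter dimensions, pooling, and loss weights are illustrative assumptions, not the authors' UH-E2FAR architecture.

```python
# Sketch of a two-branch network with a multi-task regression loss (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchFaceNet(nn.Module):
    def __init__(self, n_id=100, n_exp=30):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 3, stride=2, padding=1)    # lower-layer features
        self.conv2 = nn.Conv2d(32, 64, 3, stride=2, padding=1)   # intermediate features
        self.conv3 = nn.Conv2d(64, 128, 3, stride=2, padding=1)  # higher-layer features
        self.neutral_head = nn.Linear(128, n_id)                 # neutral (identity) shape params
        # Fusion branch: concatenate pooled intermediate features, predict expression params.
        self.fusion = nn.Sequential(nn.Linear(32 + 64, 128), nn.ReLU(), nn.Linear(128, n_exp))

    def forward(self, x):
        f1 = F.relu(self.conv1(x))
        f2 = F.relu(self.conv2(f1))
        f3 = F.relu(self.conv3(f2))
        neutral = self.neutral_head(f3.mean(dim=(2, 3)))          # global average pooling
        fused = torch.cat([f1.mean(dim=(2, 3)), f2.mean(dim=(2, 3))], dim=1)
        expression = self.fusion(fused)
        return neutral, expression

def multi_task_loss(neutral, expression, neutral_gt, expression_gt, w_exp=1.0):
    """Sum of the neutral-shape and expressive-shape regression losses."""
    return F.mse_loss(neutral, neutral_gt) + w_exp * F.mse_loss(expression, expression_gt)
```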